Parallel processing occurs when several computers, or computing elements within a single computer, run computations at the same time (in parallel). It may occur at a fairly large scale, such as across two separate computers or two cores in a single CPU. It may also occur at a fine scale, for example the way a GPU applies the same operation to many streams of data simultaneously. Parallel processing is often divided into two main types: SIMD (single instruction, multiple data), in which the same thing happens to lots of data in parallel, as in the case of the GPU; and MIMD (multiple instruction, multiple data), in which different code may run at the same time, as in the case of CPU cores.
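The SIMD/MIMD distinction can be sketched in Python. This is an illustrative analogy, not real hardware parallelism: the comprehension stands in for "same instruction applied to multiple data," while the thread pool (a standard-library feature) stands in for "multiple instructions running at the same time," as they would on separate CPU cores.

```python
from concurrent.futures import ThreadPoolExecutor

# SIMD-style: one instruction (multiply by 2) applied uniformly to many
# data elements. Real SIMD happens in hardware vector units; this
# comprehension only mimics the pattern.
data = [1, 2, 3, 4]
doubled = [x * 2 for x in data]

# MIMD-style: two different pieces of code running concurrently,
# as they might on two CPU cores.
def task_a():
    return sum(range(100))   # one instruction stream

def task_b():
    return max(range(100))   # a different instruction stream

with ThreadPoolExecutor(max_workers=2) as pool:
    future_a = pool.submit(task_a)
    future_b = pool.submit(task_b)
    results = (future_a.result(), future_b.result())

print(doubled)   # [2, 4, 6, 8]
print(results)   # (4950, 99)
```

Note that in CPython, threads interleave rather than run Python bytecode truly in parallel; multiprocessing or GPU libraries are needed for genuine MIMD or SIMD execution.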
Used in Chap. 6: pages 83, 94; Chap. 8: page 116; Chap. 12: pages 179, 187
Also known as parallelism